
D2L: Decentralized dictionary learning over dynamic networks / Daneshmand, A.; Sun, Y.; Scutari, G.; Facchinei, F. - PRINT. - (2017), pp. 4084-4088. (Paper presented at the 2017 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2017, held in New Orleans, United States, in 2017) [10.1109/ICASSP.2017.7952924].

D2L: Decentralized dictionary learning over dynamic networks

Facchinei, F.
2017

Abstract

The paper studies a general class of distributed dictionary learning (DL) problems where the learning task is distributed over a multi-agent network with (possibly) time-varying (non-symmetric) connectivity. This setting is relevant, for instance, in scenarios where massive amounts of data are not collocated but collected/stored in different spatial locations. We develop a unified distributed algorithmic framework for this class of non-convex problems and establish its asymptotic convergence. The new method hinges on Successive Convex Approximation (SCA) techniques while leveraging a novel broadcast protocol to disseminate information and distribute the computation over the network, which requires neither the double-stochasticity of the consensus matrices nor knowledge of the graph sequence to be implemented. To the best of our knowledge, this is the first distributed scheme with provable convergence for DL (and, more generally, bi-convex) problems over (time-varying) digraphs.
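To illustrate the bi-convex structure the abstract refers to, the sketch below solves a standard (centralized) dictionary learning problem, min over D and S of 0.5*||Y - D S||_F^2 + lam*||S||_1, by alternating convex updates: a proximal-gradient (ISTA) step in S for fixed D, then a projected gradient step in D for fixed S. This is a generic textbook scheme, not the paper's distributed D2L algorithm or its SCA surrogates; all function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def soft_threshold(X, t):
    """Proximal operator of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def dl_alternating(Y, k, lam=0.05, iters=200, seed=0):
    """Generic alternating scheme for the bi-convex DL objective
    0.5*||Y - D S||_F^2 + lam*||S||_1 (NOT the paper's D2L method)."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    D = rng.standard_normal((m, k))
    D /= np.linalg.norm(D, axis=0, keepdims=True)    # unit-norm atoms
    S = np.zeros((k, n))
    for _ in range(iters):
        # Sparse-coding step: the problem is convex in S for fixed D.
        L = np.linalg.norm(D.T @ D, 2) + 1e-12       # Lipschitz const. of the gradient
        S = soft_threshold(S - (D.T @ (D @ S - Y)) / L, lam / L)
        # Dictionary step: convex in D for fixed S (gradient step + column renorm).
        L2 = np.linalg.norm(S @ S.T, 2) + 1e-12
        D = D - ((D @ S - Y) @ S.T) / L2
        D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
    return D, S

# Tiny demo: factor a random low-rank matrix with an overcomplete dictionary.
rng = np.random.default_rng(1)
Y = rng.standard_normal((20, 5)) @ rng.standard_normal((5, 50))
D, S = dl_alternating(Y, k=8)
err = np.linalg.norm(Y - D @ S) / np.linalg.norm(Y)
```

In the distributed setting of the paper, each agent would hold only a slice of the data Y and its own local copy of D, with consensus over a (time-varying) digraph; the sketch above only shows why each block subproblem is convex.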
2017
2017 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2017
Dictionary Learning; distributed algorithms; nonconvex optimization; time-varying networks; Software; Signal Processing; Electrical and Electronic Engineering
04 Conference proceedings publication::04b Conference paper in volume
Files attached to this item
File: Daneshmand_D2L_2017.pdf (archive administrators only)
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 429.04 kB
Format: Adobe PDF
Access: Contact the author

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11573/1083841
Citations
  • Scopus: 2
  • Web of Science: 2